
    A Mean-field Approach for an Intercarrier Interference Canceller for OFDM

    The similarity of the mathematical description of random-field spin systems to the orthogonal frequency-division multiplexing (OFDM) scheme for wireless communication is exploited in an intercarrier-interference (ICI) canceller used in the demodulation of OFDM. The translational symmetry in the Fourier domain generically concentrates the major contribution of ICI from each subcarrier in that subcarrier's neighborhood. This observation, in conjunction with a mean-field approach, leads to the development of an ICI canceller whose computational cost scales linearly with the number of subcarriers. It is also shown that the dynamics of the mean-field canceller are well captured by a discrete map of a single macroscopic variable, without taking the spatial and temporal correlations of the estimated variables into account. Comment: 7 pages, 3 figures
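    The abstract describes, but does not spell out, the update rule of the canceller. The following is a minimal, hypothetical sketch of a mean-field ICI cancellation sweep for binary (BPSK) subcarrier symbols under the banded-interference assumption mentioned above; the function name, the banded matrix H, the band width, and noise_var are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def mean_field_ici_canceller(y, H, noise_var, n_iter=20, band=2):
            """Hypothetical mean-field ICI cancellation sketch (BPSK subcarriers).

            y         : received frequency-domain vector (N subcarriers)
            H         : ICI matrix; only a band of width `band` around the
                        diagonal is assumed to carry significant interference
            Returns soft symbol estimates m_k = <s_k> in [-1, 1].
            """
            N = len(y)
            m = np.zeros(N)                      # mean-field soft symbols
            for _ in range(n_iter):
                for k in range(N):
                    # interference expected from neighbouring subcarriers only,
                    # which keeps the per-sweep cost O(N * band)
                    lo, hi = max(0, k - band), min(N, k + band + 1)
                    interference = H[k, lo:hi] @ m[lo:hi] - H[k, k] * m[k]
                    # effective field on subcarrier k after subtracting the
                    # expected interference from its neighbourhood
                    h_eff = np.real(np.conj(H[k, k]) * (y[k] - interference)) / noise_var
                    m[k] = np.tanh(h_eff)        # mean-field update for a binary symbol
            return m

    In practice one would monitor a macroscopic quantity, such as the average of m_k**2 across sweeps, in the spirit of the single-variable discrete map mentioned in the abstract.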

    Critical Noise Levels for LDPC decoding

    We determine the critical noise level for decoding low-density parity-check (LDPC) error-correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes, such as typical pairs decoding, MAP, and finite-temperature (MPM) decoding, becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach. Comment: 9 pages, 5 figures

    Thouless-Anderson-Palmer Approach for Lossy Compression

    We study an ill-posed linear inverse problem in which a binary sequence is reproduced using a sparse matrix. According to a previous study, this model can theoretically provide an optimal compression scheme for an arbitrary distortion level, although the encoding procedure remains an NP-complete problem. In this paper, we focus on the consistency condition of a Markov-type dynamical model to derive an iterative algorithm, following the Thouless-Anderson-Palmer approach. Numerical results show that the algorithm can empirically saturate the theoretical limit for the sparse construction of our codes, which is also very close to the rate-distortion function. Comment: 10 pages, 3 figures
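    As a concrete reference point for the iterative algorithm mentioned above, here is a generic Thouless-Anderson-Palmer self-consistency iteration for an Ising system with couplings J and local fields h. It is a sketch of the TAP update structure only, not the paper's compression encoder; the damping factor and iteration counts are arbitrary choices.

        import numpy as np

        def tap_iteration(J, h, beta=1.0, n_iter=50, damping=0.5):
            """Generic TAP self-consistency iteration (illustrative sketch):
            m_i = tanh( beta * ( h_i + sum_j J_ij m_j
                                 - beta * m_i * sum_j J_ij**2 * (1 - m_j**2) ) )
            """
            m = np.tanh(beta * h)                              # naive initial estimate
            for _ in range(n_iter):
                cavity_field = J @ m                           # naive mean-field term
                onsager = beta * m * (J**2 @ (1.0 - m**2))     # Onsager reaction term
                m_new = np.tanh(beta * (h + cavity_field - onsager))
                m = damping * m + (1.0 - damping) * m_new      # damped update
            return m

    In the lossy-compression setting, the couplings and fields would be derived from the sparse matrix and the sequence to be reproduced; that mapping is specific to the paper and is not reproduced here.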

    Statistical mechanics of typical set decoding

    The performance of "typical set (pairs) decoding" for ensembles of Gallager's linear codes is investigated using statistical physics. In this decoding scheme, errors occur either when the transmission is corrupted by atypical noise, or when two or more typical sequences satisfy the parity-check equation associated with the received codeword, to which typical noise has been added. We show that the average error rate for the latter case over a given code ensemble can be tightly evaluated using the replica method, including its sensitivity to the message length. Our approach generally improves on the existing analysis known in the information theory community, which was reintroduced by MacKay (1999) and is believed to be the most accurate to date. Comment: 7 pages

    Survey propagation for the cascading Sourlas code

    We investigate how insights from statistical physics, namely survey propagation, can improve the decoding of a particular class of sparse error-correcting codes. We show that a recently proposed algorithm, time-averaged belief propagation, is in fact intimately linked to a specific survey propagation for which Parisi's replica symmetry breaking parameter is set to zero, and that the latter is always superior to belief propagation in the high-connectivity limit. We briefly look at further improvements available by going to the second level of replica symmetry breaking. Comment: 14 pages, 5 figures

    Statistical Mechanics of Dictionary Learning

    Finding a basis matrix (dictionary) by which objective signals are represented sparsely is of major relevance in various scientific and technological fields. We consider the problem of learning a dictionary from a set of training signals. We employ techniques from the statistical mechanics of disordered systems to evaluate the size of the training set necessary for dictionary learning to typically succeed. The results indicate that the necessary size is much smaller than previously estimated, which theoretically supports and encourages the use of dictionary learning in practical situations. Comment: 6 pages, 4 figures
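    For readers who want a concrete picture of the learning task being analyzed, the following is an illustrative alternating-minimization sketch of dictionary learning (sparse coding by iterative soft thresholding, then a least-squares dictionary update). It is a toy implementation in a standard style, not the statistical-mechanics analysis of the paper; the penalty, iteration counts, and random initialization are arbitrary assumptions.

        import numpy as np

        def learn_dictionary(Y, n_atoms, penalty=0.1, n_iter=50):
            """Toy dictionary learning: find D, X with Y ~ D @ X and X sparse.

            Y : (dim, n_samples) matrix whose columns are training signals
            """
            dim, n_samples = Y.shape
            rng = np.random.default_rng(0)
            D = rng.standard_normal((dim, n_atoms))
            D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
            X = np.zeros((n_atoms, n_samples))
            for _ in range(n_iter):
                # sparse-coding step: a few ISTA (soft-thresholding) updates on X
                L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
                for _ in range(10):
                    G = X - (1.0 / L) * (D.T @ (D @ X - Y))  # gradient step
                    X = np.sign(G) * np.maximum(np.abs(G) - penalty / L, 0.0)
                # dictionary-update step: least squares, then renormalize the atoms
                D = Y @ np.linalg.pinv(X)
                D /= np.linalg.norm(D, axis=0) + 1e-12
            return D, X

    The quantity studied in the paper, the number of training signals (columns of Y) needed for dictionary learning to typically succeed, is evaluated analytically rather than by running a procedure of this kind.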

    Analysis of common attacks in LDPCC-based public-key cryptosystems

    We analyze the security and reliability of a recently proposed class of public-key cryptosystems against attacks by unauthorized parties who have acquired partial knowledge of one or more of the private key components and/or of the plaintext. Phase diagrams are presented, showing the critical partial-knowledge levels required for unauthorized decryption. Comment: 14 pages, 6 figures

    Error-correcting code on a cactus: a solvable model

    An exact solution to a family of parity-check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes. Comment: 7 pages, 3 figures, with minor corrections
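    Since the probability propagation (belief propagation) decoder is central to the analysis, a minimal sum-product decoder for a binary parity-check code over a binary symmetric channel is sketched below. This is the standard textbook form of the algorithm, written for readability rather than efficiency; the clipping constants are ad-hoc numerical safeguards and are not taken from the paper.

        import numpy as np

        def bp_decode(H, y, flip_prob, n_iter=50):
            """Sum-product (probability propagation) decoding sketch.

            H         : (n_checks, n_bits) parity-check matrix with 0/1 entries
            y         : received bit vector (0/1) from a binary symmetric channel
            flip_prob : channel flip probability (0 < flip_prob < 0.5)
            """
            n_checks, n_bits = H.shape
            # channel log-likelihood ratios for each bit
            llr_ch = (1.0 - 2.0 * np.asarray(y, dtype=float)) * np.log((1.0 - flip_prob) / flip_prob)
            # bit-to-check messages, initialised to the channel LLRs
            M = H * llr_ch
            for _ in range(n_iter):
                # check-to-bit messages via the tanh rule
                T = np.tanh(np.clip(M, -30.0, 30.0) / 2.0)
                T[H == 0] = 1.0                       # neutral element for the product
                prod = np.prod(T, axis=1, keepdims=True)
                E = 2.0 * np.arctanh(np.clip(prod / (T + 1e-15), -0.999999, 0.999999))
                E[H == 0] = 0.0
                # bit posteriors and new bit-to-check messages (leave-one-out)
                total = llr_ch + E.sum(axis=0)
                M = H * (total - E)
                x_hat = (total < 0).astype(int)       # hard decision
                if not np.any((H @ x_hat) % 2):       # all parity checks satisfied
                    break
            return x_hat

    The abstract states that this probability propagation algorithm emerges naturally from the cactus analysis; the sketch above only shows the generic form of the message updates on an arbitrary parity-check matrix.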

    Statistical Mechanics of Broadcast Channels Using Low Density Parity Check Codes

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a broadcast channel, one of the fundamental models of network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple timesharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based timesharing codes, while the best performance, obtained when received transmissions are optimally decoded, is bounded by the timesharing limit. Comment: 14 pages, 4 figures